Search Results

Documents authored by Raman, Rajeev


Document
Weighted Ancestors in Suffix Trees Revisited

Authors: Djamal Belazzougui, Dmitry Kosolobov, Simon J. Puglisi, and Rajeev Raman

Published in: LIPIcs, Volume 191, 32nd Annual Symposium on Combinatorial Pattern Matching (CPM 2021)


Abstract
The weighted ancestor problem is a well-known generalization of the predecessor problem to trees. It is known to require Ω(log log n) time for queries provided 𝒪(n polylog n) space is available and weights are from [0..n], where n is the number of tree nodes. However, when applied to suffix trees, the problem, surprisingly, admits an 𝒪(n)-space solution with constant query time, as was shown by Gawrychowski, Lewenstein, and Nicholson (Proc. ESA 2014). This variant of the problem can be reformulated as follows: given the suffix tree of a string s, we need a data structure that can locate in the tree any substring s[p..q] of s in 𝒪(1) time (as if one descended from the root reading s[p..q] along the way). Unfortunately, the data structure of Gawrychowski et al. has no efficient construction algorithm, limiting its wider usage as an algorithmic tool. In this paper we resolve this issue, describing a data structure for weighted ancestors in suffix trees with constant query time and a linear construction algorithm. Our solution is based on a novel approach using so-called irreducible LCP values.
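For intuition, a minimal sketch (in Python) of the naive descent that the reformulated query replaces: it locates the substring s[p..q] by reading it from the root character by character, taking time proportional to its length, whereas the paper's weighted-ancestor structure answers the same query in 𝒪(1) after a linear-time construction. The node layout below (children keyed by the first character of the outgoing edge, edge labels stored as index ranges into s) is an assumption made purely for illustration, not the paper's representation.

class Node:
    def __init__(self):
        # first character of outgoing edge label -> (child, lo, hi),
        # where the edge label is s[lo:hi]
        self.children = {}

def locate(root, s, p, q):
    """Descend from the root reading s[p:q] (half-open); return the locus as
    (node, offset), where offset > 0 means the walk ended offset characters
    down the edge leading into node. Takes O(q - p) time, which is what the
    paper's structure improves to O(1)."""
    node, i = root, p
    while i < q:
        child, lo, hi = node.children[s[i]]
        edge_len = hi - lo
        take = min(edge_len, q - i)
        # sanity check: s[p:q] is a substring of s, so the labels must match
        assert s[lo:lo + take] == s[i:i + take]
        i += take
        if take < edge_len:
            return child, take
        node = child
    return node, 0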

Cite as

Djamal Belazzougui, Dmitry Kosolobov, Simon J. Puglisi, and Rajeev Raman. Weighted Ancestors in Suffix Trees Revisited. In 32nd Annual Symposium on Combinatorial Pattern Matching (CPM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 191, pp. 8:1-8:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{belazzougui_et_al:LIPIcs.CPM.2021.8,
  author =	{Belazzougui, Djamal and Kosolobov, Dmitry and Puglisi, Simon J. and Raman, Rajeev},
  title =	{{Weighted Ancestors in Suffix Trees Revisited}},
  booktitle =	{32nd Annual Symposium on Combinatorial Pattern Matching (CPM 2021)},
  pages =	{8:1--8:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-186-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{191},
  editor =	{Gawrychowski, Pawe{\l} and Starikovskaya, Tatiana},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2021.8},
  URN =		{urn:nbn:de:0030-drops-139594},
  doi =		{10.4230/LIPIcs.CPM.2021.8},
  annote =	{Keywords: suffix tree, weighted ancestors, irreducible LCP, deterministic substring hashing}
}
Document
Fast and Simple Compact Hashing via Bucketing

Authors: Dominik Köppl, Simon J. Puglisi, and Rajeev Raman

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
Compact hash tables store a set S of n key-value pairs, where the keys are from the universe U = {0,…,u-1}, and the values are v-bit integers, in close to B(u, n) + nv bits of space, where B(u, n) = log₂ binom(u,n) is the information-theoretic lower bound for representing the set of keys in S, and support operations insert, delete and lookup on S. Compact hash tables have received significant attention in recent years, and approaches dating back to Cleary [IEEE T. Comput, 1984], as well as more recent ones, have been implemented and used in a number of applications. However, the space savings of these approaches are outweighed by their slowness relative to conventional hash tables. In this paper, we demonstrate that compact hash tables based upon a simple idea of bucketing practically outperform existing compact hash table implementations in terms of memory usage and construction time, and existing fast hash table implementations in terms of memory usage (and sometimes also in terms of construction time). A related notion is that of a compact Hash ID map, which stores a set Ŝ of n keys from U, and implicitly associates each key in Ŝ with a unique value (its ID), chosen by the data structure itself, which is an integer of magnitude O(n), and supports inserts and lookups on Ŝ, while using close to B(u,n) bits. One of our approaches is suitable for use as a compact Hash ID map.
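For intuition, here is a toy Python sketch of the bucketing/quotienting principle the title alludes to: each key is split into a bucket index and a quotient, and only the quotient (plus the value) is stored inside the bucket, so the bucket index is represented implicitly. This is an illustration of the general idea under simplifying assumptions (keys are split on raw bits with no prior invertible hashing, and buckets are plain Python lists), not the paper's data structure.

class BucketedCompactMap:
    def __init__(self, key_bits, bucket_bits):
        # keys come from U = {0, ..., 2**key_bits - 1}; the top bucket_bits
        # bits select a bucket, and only the remaining low bits (the quotient)
        # are stored, together with the value
        self.quot_bits = key_bits - bucket_bits
        self.buckets = [[] for _ in range(1 << bucket_bits)]

    def _split(self, key):
        return key >> self.quot_bits, key & ((1 << self.quot_bits) - 1)

    def insert(self, key, value):
        b, q = self._split(key)
        for idx, (q2, _) in enumerate(self.buckets[b]):
            if q2 == q:                    # key already present: overwrite value
                self.buckets[b][idx] = (q, value)
                return
        self.buckets[b].append((q, value))

    def lookup(self, key):
        b, q = self._split(key)
        for q2, v in self.buckets[b]:
            if q2 == q:
                return v
        return None

With 16-bit keys and 256 buckets, for example, only 8 bits of quotient (plus the value) are kept per key; a real compact hash table would additionally pack the quotients into a bit array and apply an invertible hash to spread the keys across buckets.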

Cite as

Dominik Köppl, Simon J. Puglisi, and Rajeev Raman. Fast and Simple Compact Hashing via Bucketing. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 7:1-7:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{koppl_et_al:LIPIcs.SEA.2020.7,
  author =	{K\"{o}ppl, Dominik and Puglisi, Simon J. and Raman, Rajeev},
  title =	{{Fast and Simple Compact Hashing via Bucketing}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{7:1--7:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.7},
  URN =		{urn:nbn:de:0030-drops-120817},
  doi =		{10.4230/LIPIcs.SEA.2020.7},
  annote =	{Keywords: compact hashing, hash table, separate chaining}
}
Document
Complete Volume
LIPIcs, Volume 75, SEA'17, Complete Volume

Authors: Costas S. Iliopoulos, Solon P. Pissis, Simon J. Puglisi, and Rajeev Raman

Published in: LIPIcs, Volume 75, 16th International Symposium on Experimental Algorithms (SEA 2017)


Abstract
LIPIcs, Volume 75, SEA'17, Complete Volume

Cite as

16th International Symposium on Experimental Algorithms (SEA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 75, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


@Proceedings{iliopoulos_et_al:LIPIcs.SEA.2017,
  title =	{{LIPIcs, Volume 75, SEA'17, Complete Volume}},
  booktitle =	{16th International Symposium on Experimental Algorithms (SEA 2017)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-036-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{75},
  editor =	{Iliopoulos, Costas S. and Pissis, Solon P. and Puglisi, Simon J. and Raman, Rajeev},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2017},
  URN =		{urn:nbn:de:0030-drops-76644},
  doi =		{10.4230/LIPIcs.SEA.2017},
  annote =	{Keywords: Analysis of Algorithms and Problem Complexity, Algorithms}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization, External Reviewers

Authors: Costas S. Iliopoulos, Solon P. Pissis, Simon J. Puglisi, and Rajeev Raman

Published in: LIPIcs, Volume 75, 16th International Symposium on Experimental Algorithms (SEA 2017)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization, External Reviewers

Cite as

16th International Symposium on Experimental Algorithms (SEA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 75, pp. 0:i-0:xiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


@InProceedings{iliopoulos_et_al:LIPIcs.SEA.2017.0,
  author =	{Iliopoulos, Costas S. and Pissis, Solon P. and Puglisi, Simon J. and Raman, Rajeev},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization, External Reviewers}},
  booktitle =	{16th International Symposium on Experimental Algorithms (SEA 2017)},
  pages =	{0:i--0:xiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-036-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{75},
  editor =	{Iliopoulos, Costas S. and Pissis, Solon P. and Puglisi, Simon J. and Raman, Rajeev},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2017.0},
  URN =		{urn:nbn:de:0030-drops-76006},
  doi =		{10.4230/LIPIcs.SEA.2017.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization, External Reviewers}
}
Document
Asymptotically Optimal Encodings for Range Selection

Authors: Gonzalo Navarro, Rajeev Raman, and Srinivasa Rao Satti

Published in: LIPIcs, Volume 29, 34th International Conference on Foundation of Software Technology and Theoretical Computer Science (FSTTCS 2014)


Abstract
We consider the problem of preprocessing an array A[1..n] to answer range selection and range top-k queries. Given a query interval [i..j] and a value k, the former query asks for the position of the k-th largest value in A[i..j], whereas the latter asks for the positions of all the k largest values in A[i..j]. We consider the encoding version of the problem, where A is not available at query time, and an upper bound κ on k, the rank that is to be selected, is given at construction time. We obtain data structures with asymptotically optimal size and query time on a RAM model with word size Θ(lg n): our structures use O(n lg κ) bits and answer range selection queries in time O(1 + lg k / lg lg n) and range top-k queries in time O(k), for any k ≤ κ.
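To make the query semantics concrete, the naive Python reference below answers both query types directly from A; the point of the paper is that its encodings answer them without storing A at all, within O(n lg κ) bits. The 1-based positions and the tie-breaking by leftmost position are assumptions made for illustration.

def range_top_k(A, i, j, k):
    """Positions (1-based) of the k largest values in A[i..j]."""
    window = [(A[p - 1], p) for p in range(i, j + 1)]
    window.sort(key=lambda t: t[0], reverse=True)   # stable: ties stay leftmost-first
    return [p for _, p in window[:k]]

def range_select(A, i, j, k):
    """Position (1-based) of the k-th largest value in A[i..j]."""
    return range_top_k(A, i, j, k)[-1]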

Cite as

Gonzalo Navarro, Rajeev Raman, and Srinivasa Rao Satti. Asymptotically Optimal Encodings for Range Selection. In 34th International Conference on Foundation of Software Technology and Theoretical Computer Science (FSTTCS 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 29, pp. 291-301, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


@InProceedings{navarro_et_al:LIPIcs.FSTTCS.2014.291,
  author =	{Navarro, Gonzalo and Raman, Rajeev and Satti, Srinivasa Rao},
  title =	{{Asymptotically Optimal Encodings for Range Selection}},
  booktitle =	{34th International Conference on Foundation of Software Technology and Theoretical Computer Science (FSTTCS 2014)},
  pages =	{291--301},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-77-4},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{29},
  editor =	{Raman, Venkatesh and Suresh, S. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2014.291},
  URN =		{urn:nbn:de:0030-drops-48502},
  doi =		{10.4230/LIPIcs.FSTTCS.2014.291},
  annote =	{Keywords: Data Structures, Order Statistics, Succinct Data Structures, Space-efficient Data Structures}
}
Document
More Haste, Less Waste: Lowering the Redundancy in Fully Indexable Dictionaries

Authors: Roberto Grossi, Alessio Orlandi, Rajeev Raman, and S. Srinivasa Rao

Published in: LIPIcs, Volume 3, 26th International Symposium on Theoretical Aspects of Computer Science (2009)


Abstract
We consider the problem of representing, in a compressed format, a bit-vector S of m bits with n 1s, supporting the following operations, where b ∈ {0, 1}: rank_b(S, i) returns the number of occurrences of bit b in the prefix S[1..i], and select_b(S, i) returns the position of the i-th occurrence of bit b in S. Such a data structure is called a fully indexable dictionary (FID) [Raman, Raman, and Rao, 2007], and is at least as powerful as predecessor data structures. Viewing S as a set X = {x_1, x_2, …, x_n} of n distinct integers drawn from a universe [m] = {1, …, m}, the predecessor of integer y ∈ [m] in X is given by select_1(S, rank_1(S, y-1)). FIDs have many applications in succinct and compressed data structures, as they are often involved in the construction of succinct representations for a variety of abstract data types. Our focus is on space-efficient FIDs on the RAM model with word size Θ(lg m) and constant time for all operations, so that the time cost is independent of the input size. Given the bitstring S to be encoded, having length m and containing n ones, the minimal amount of information that needs to be stored is B(n,m) = ⌈log₂ binom(m,n)⌉. The state of the art in building a FID for S is given in [Pătraşcu, 2008], using B(n,m) + O(m/((log m/t)^t)) + O(m^(3/4)) bits to support the operations in O(t) time. Here, we propose a parametric data structure exhibiting a time/space trade-off such that, for any real constants 0 < δ ≤ 1/2, 0 < ε ≤ 1, and integer s > 0, it uses B(n,m) + O(n^(1+δ) + n(m/n^s)^ε) bits and performs all the operations in time O(s/δ + 1/ε). The improvement is twofold: our redundancy can be lowered parametrically and, fixing s = O(1), we get a constant-time FID whose space is B(n,m) + O(m^ε/poly(n)) bits, for sufficiently large m. This is a significant improvement compared to the previous bounds for the general case.
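For readers unfamiliar with the interface, the Python sketch below spells out rank, select, and the predecessor identity quoted above on a plain list of bits; it makes no attempt at the constant-time, B(n,m)-bit guarantees that are the subject of the paper, and the 1-based indexing mirrors S[1..m].

def rank(S, b, i):
    """Number of occurrences of bit b in the prefix S[1..i]."""
    return sum(1 for bit in S[:i] if bit == b)

def select(S, b, i):
    """Position of the i-th occurrence of bit b in S, or None if it does not exist."""
    count = 0
    for pos, bit in enumerate(S, start=1):
        if bit == b:
            count += 1
            if count == i:
                return pos
    return None

def predecessor(S, y):
    """Predecessor of y in X (the set of positions of 1s in S), via the identity
    select_1(S, rank_1(S, y-1)) from the abstract; None if no predecessor exists."""
    return select(S, 1, rank(S, 1, y - 1))

For example, with S = [0, 1, 0, 1, 1] (so X = {2, 4, 5}), predecessor(S, 4) evaluates rank(S, 1, 3) = 1 and then select(S, 1, 1) = 2.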

Cite as

Roberto Grossi, Alessio Orlandi, Rajeev Raman, and S. Srinivasa Rao. More Haste, Less Waste: Lowering the Redundancy in Fully Indexable Dictionaries. In 26th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 3, pp. 517-528, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


@InProceedings{grossi_et_al:LIPIcs.STACS.2009.1847,
  author =	{Grossi, Roberto and Orlandi, Alessio and Raman, Rajeev and Rao, S. Srinivasa},
  title =	{{More Haste, Less Waste: Lowering the Redundancy in Fully Indexable Dictionaries}},
  booktitle =	{26th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{517--528},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-09-5},
  ISSN =	{1868-8969},
  year =	{2009},
  volume =	{3},
  editor =	{Albers, Susanne and Marion, Jean-Yves},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2009.1847},
  URN =		{urn:nbn:de:0030-drops-18470},
  doi =		{10.4230/LIPIcs.STACS.2009.1847},
  annote =	{Keywords: }
}
Document
Computing Minimum Spanning Trees with Uncertainty

Authors: Michael Hoffmann, Thomas Erlebach, Danny Krizanc, Matúš Mihalák, and Rajeev Raman

Published in: LIPIcs, Volume 1, 25th International Symposium on Theoretical Aspects of Computer Science (2008)


Abstract
We consider the minimum spanning tree problem in a setting where information about the edge weights of the given graph is uncertain. Initially, for each edge e of the graph only a set A_e, called an uncertainty area, that contains the actual edge weight w_e is known. The algorithm can 'update' e to obtain the edge weight w_e ∈ A_e. The task is to output the edge set of a minimum spanning tree after a minimum number of updates. An algorithm is k-update competitive if it makes at most k times as many updates as the optimum. We present a 2-update competitive algorithm if all areas A_e are open or trivial, which is the best possible among deterministic algorithms. The condition on the areas A_e is to exclude degenerate inputs for which no constant update competitive algorithm can exist. Next, we consider a setting where the vertices of the graph correspond to points in Euclidean space and the weight of an edge is equal to the distance of its endpoints. The location of each point is initially given as an uncertainty area, and an update reveals the exact location of the point. We give a general relation between the edge uncertainty and the vertex uncertainty versions of a problem and use it to derive a 4-update competitive algorithm for the minimum spanning tree problem in the vertex uncertainty model. Again, we show that this is best possible among deterministic algorithms.
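As a small illustration of the vertex uncertainty model from the second half of the abstract, the Python sketch below derives the induced edge weight uncertainty for two points whose locations are only known up to closed disks in the plane; the choice of disks is an assumption made for illustration, as the abstract does not fix the shape of the areas.

import math

def edge_weight_interval(center_a, radius_a, center_b, radius_b):
    """Interval [lo, hi] of possible Euclidean distances between a point in the
    closed disk (center_a, radius_a) and a point in the closed disk
    (center_b, radius_b)."""
    d = math.dist(center_a, center_b)
    lo = max(0.0, d - radius_a - radius_b)
    hi = d + radius_a + radius_b
    return lo, hi

Updating (revealing) a vertex shrinks its radius to 0 and tightens the interval of every incident edge, which is one way to see how the vertex uncertainty version induces an instance of the edge uncertainty version.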

Cite as

Michael Hoffmann, Thomas Erlebach, Danny Krizanc, Matúš Mihalák, and Rajeev Raman. Computing Minimum Spanning Trees with Uncertainty. In 25th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 1, pp. 277-288, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


@InProceedings{hoffmann_et_al:LIPIcs.STACS.2008.1358,
  author =	{Hoffmann, Michael and Erlebach, Thomas and Krizanc, Danny and Mihal'\'{a}k, Mat\'{u}s and Raman, Rajeev},
  title =	{{Computing Minimum Spanning Trees with Uncertainty}},
  booktitle =	{25th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{277--288},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-06-4},
  ISSN =	{1868-8969},
  year =	{2008},
  volume =	{1},
  editor =	{Albers, Susanne and Weil, Pascal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2008.1358},
  URN =		{urn:nbn:de:0030-drops-13581},
  doi =		{10.4230/LIPIcs.STACS.2008.1358},
  annote =	{Keywords: Algorithms and data structures; Current challenges: mobile and net computing}
}